Health Informatics J ; 28(4): 14604582221131198, 2022.
Article in English | MEDLINE | ID: covidwho-2064628

ABSTRACT

BACKGROUND: Radiology requests and reports contain valuable information about diagnostic findings and indications, and transformer-based language models are promising for more accurate text classification.

METHODS: In a retrospective study, 2256 radiologist-annotated radiology requests (8 classes) and reports (10 classes) were divided into training and testing datasets (90% and 10%, respectively) and used to train 32 models. Performance metrics were compared by model type (LSTM, Bertje, RobBERT, BERT-clinical, BERT-multilingual, BERT-base), text length, data prevalence, and training strategy. The best models were then used to predict the categories of the remaining 40,873 requests and reports.

RESULTS: The RobBERT model performed best after 4000 training iterations, yielding AUC values ranging from 0.808 [95% CI (0.757-0.859)] to 0.976 [95% CI (0.956-0.996)] for the requests and from 0.746 [95% CI (0.689-0.802)] to 1.0 [95% CI (1.0-1.0)] for the reports. The AUC for the classification of normal reports was 0.95 [95% CI (0.922-0.979)]. The predicted data showed variability in both the diagnostic yield across request classes and the request patterns related to COVID-19 hospital admission data.

CONCLUSION: Transformer-based natural language processing is feasible for the multilabel classification of chest imaging request and report items. Diagnostic yield varies with the information in the requests.
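The per-class AUC values with 95% confidence intervals reported above can be illustrated with a minimal, self-contained sketch. This is not the authors' pipeline: the rank-based (Mann-Whitney) AUC and the percentile bootstrap shown here are standard techniques assumed for illustration, and the data are synthetic.

```python
import random

def auc(y_true, scores):
    """Rank-based (Mann-Whitney) AUC for one binary label.

    Equals the probability that a randomly chosen positive
    example is scored higher than a randomly chosen negative one.
    """
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    if not pos or not neg:
        return float("nan")  # AUC undefined without both classes
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_ci(y_true, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap 95% CI for the AUC (illustrative)."""
    rng = random.Random(seed)
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        a = auc([y_true[i] for i in idx], [scores[i] for i in idx])
        if a == a:  # skip NaN resamples lacking one class
            stats.append(a)
    stats.sort()
    lo = stats[int((alpha / 2) * len(stats))]
    hi = stats[min(len(stats) - 1, int((1 - alpha / 2) * len(stats)))]
    return lo, hi
```

In a multilabel setting such as the 8 request classes and 10 report classes above, this evaluation would be run once per class, comparing each class's predicted probability against its binary ground-truth label.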


Subject(s)
COVID-19, Radiology, COVID-19/diagnostic imaging, Humans, Natural Language Processing, Research Report, Retrospective Studies